
GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels

Neural Information Processing Systems

The DiscGraph set captures wide-ranging and diverse graph data distribution discrepancies through a discrepancy measurement function, which exploits GNN outputs related to latent node embeddings and node class predictions.


Supplement

Neural Information Processing Systems

We have tried using the same number of layers for GRAND/BLEND as in their paper. However, the clean test accuracy is low. For example, grb-cora [7] has a node feature size of 302, while the original Cora dataset has a node feature size of 1433.


GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels

Neural Information Processing Systems

Evaluating the performance of graph neural networks (GNNs) is an essential task for practical GNN model deployment and serving, as deployed GNNs face significant performance uncertainty when inferring on unseen and unlabeled test graphs, due to mismatched training-test graph distributions.




Adaptive Node Feature Selection For Graph Neural Networks

Azizpour, Ali, Navarro, Madeline, Segarra, Santiago

arXiv.org Artificial Intelligence

We propose an adaptive node feature selection approach for graph neural networks (GNNs) that identifies and removes unnecessary features during training. The ability to measure how features contribute to model output is key for interpreting decisions, reducing dimensionality, and even improving performance by eliminating unhelpful variables. However, graph-structured data introduces complex dependencies that may not be amenable to classical feature importance metrics. Inspired by this challenge, we present a model- and task-agnostic method that determines relevant features during training based on changes in validation performance upon permuting feature values. We theoretically motivate our intervention-based approach by characterizing how GNN performance depends on the relationships between node data and graph structure. Not only do we return feature importance scores once training concludes, we also track how relevance evolves as features are successively dropped. We can therefore monitor whether features are eliminated effectively and also evaluate other metrics with this technique. Our empirical results verify the flexibility of our approach to different graph architectures as well as its adaptability to more challenging graph learning settings. Graphs provide powerful yet well-understood representations of complex data (Bronstein et al., 2017).
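The permutation-based relevance measure described in the abstract can be sketched in a few lines: permute one feature's values across nodes to break its link to the labels, and record the drop in validation score. The sketch below is a minimal, model-agnostic illustration in numpy; `score_fn` and `permutation_importance` are hypothetical names, not the paper's API, and a real use would pass a trained GNN's validation accuracy as `score_fn`.

```python
import numpy as np

def permutation_importance(score_fn, X, y, rng=None):
    """Relevance of each feature column of X: the drop in validation
    score when that column's values are shuffled across nodes."""
    rng = np.random.default_rng(rng)
    base = score_fn(X, y)                  # score with intact features
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break feature-label link
        scores[j] = base - score_fn(Xp, y)     # large drop => important
    return scores

# Toy check: labels depend only on feature 0, so only it should score high.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = (X[:, 0] > 0).astype(int)
score = lambda X_, y_: np.mean((X_[:, 0] > 0) == y_)  # stand-in "model"
imp = permutation_importance(score, X, y, rng=1)
```

Under this toy setup `imp[0]` is large (permuting feature 0 collapses accuracy toward chance) while `imp[1]` and `imp[2]` are zero, which is the signal the paper's method uses to decide which features to drop during training.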





Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach

Neural Information Processing Systems

Graph neural networks (GNNs) are vulnerable to adversarial perturbations, including those that affect both node features and graph topology. This paper investigates GNNs derived from diverse neural flows, concentrating on their connection to various stability notions such as BIBO stability, Lyapunov stability, structural stability, and conservative stability. We argue that Lyapunov stability, despite its common use, does not necessarily ensure adversarial robustness. Inspired by physics principles, we advocate for the use of conservative Hamiltonian neural flows to construct GNNs that are robust to adversarial attacks. The adversarial robustness of different neural flow GNNs is empirically compared on several benchmark datasets under a variety of adversarial attacks.
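The conservation property motivating the Hamiltonian approach can be illustrated independently of any GNN: a symplectic (leapfrog) integrator of Hamilton's equations nearly conserves the energy H, whereas plain Euler steps let it drift. The sketch below is an illustrative numpy example on a harmonic oscillator, not the paper's architecture; all names are hypothetical.

```python
import numpy as np

def leapfrog(q, p, grad_H_q, grad_H_p, dt, steps):
    """Symplectic integration of Hamilton's equations
    dq/dt = dH/dp, dp/dt = -dH/dq. Near-conserves H over long horizons."""
    p = p - 0.5 * dt * grad_H_q(q)      # initial half kick
    for _ in range(steps - 1):
        q = q + dt * grad_H_p(p)        # drift
        p = p - dt * grad_H_q(q)        # full kick
    q = q + dt * grad_H_p(p)            # final drift
    p = p - 0.5 * dt * grad_H_q(q)      # final half kick
    return q, p

# Harmonic oscillator H(q, p) = (q^2 + p^2) / 2, so dH/dq = q, dH/dp = p.
q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, lambda q: q, lambda p: p, dt=0.01, steps=1000)
energy0 = 0.5 * (q0**2 + p0**2)
energy1 = 0.5 * (q1**2 + p1**2)
```

After 1000 steps the energy error stays tiny (O(dt^2) for leapfrog), which is the "conservative" behavior the paper argues makes Hamiltonian neural flows resistant to adversarial perturbations accumulating through the network's dynamics.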